1. Comparing test results
The test results are compared against the predicted results in the test scripts and checklists. If testing is being
done based on an exploratory technique, the tester will compare the outcome against the documented test basis, such as
the functional design or a requirements document. If there is no documented test basis, the tester needs to find other
ways of comparing the outcome. This information can be obtained, for example, from norms and standards, memos, user
manuals, interviews, advertisements or rival products (see also the tip in Tips - Absence Of Test Basis).
If no deviations are found, this is logged. If deviations are found, they are analysed. Comparing the test
results often takes place simultaneously with the execution of the test: by checking off the steps in the
test script, the tester can indicate whether a test result corresponds with the expected result. In certain cases it is not
possible to do this during the test (e.g. with batch systems, where the output of several test cases is presented together).
2. Analysing differences
The differences found are further analysed during this subactivity. The tester should perform the following steps:
- Gather evidence
- Reproduce the defect
- Check for own mistakes
- Determine suspected external cause
- Isolate the cause (optional)
- Generalise the defect
- Compare with other defects
- Write defect report
- Have it reviewed.
These steps are explained in section "Finding a defect" in Defects Management. The steps are listed in the general sequence of execution, but it is entirely possible to carry out
particular steps in another order or in parallel. If, for example, the tester immediately sees that the defect was
already found in the same test, the interim steps need not be performed. In the test scripts, the numbers of the
defects are registered with the test cases in which they were found. That way, it quickly becomes clear during any
retest which test actions, at a minimum, need to be carried out again. Various test tools are available both for
comparing the test results and for analysing the differences (see Test Tools).
3. Determining retests
Found defects may be a reason for carrying out retests. If the cause of a defect is a fault in the test execution,
the relevant test is carried out again. Defects that originate in a wrong test script or checklist are solved first;
thereafter, the changed part of the test script is executed again, or the entire checklist is gone through again. Faults in
the test environment should also be solved, after which the relevant test scripts are executed again in their
entirety.
Faults in the test object or the test basis usually mean a new version of the test object. With a fault in the test
basis, the associated test scripts usually also need to be amended, which often involves a lot of work. When retests
take place, it is important to establish how they are to be carried out. In the Control phase, the test manager
determines whether the test scripts should be carried out again in whole or in part; this depends partly on:
- The exit criteria set out in the test plan
- The severity of the defects
- The number of defects
- The degree to which the earlier execution of the test script was disrupted by the defects
- The time available
- The risks.
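The factors above feed a judgment call by the test manager, not a formula. Purely as an illustration, such a decision could be sketched with made-up thresholds (none of these rules are prescribed by the method):

```python
from dataclasses import dataclass

@dataclass
class RetestInput:
    """Factors the test manager weighs when scoping a retest.
    The fields and thresholds below are illustrative assumptions."""
    disrupting_defects: int   # defects that disrupted earlier execution
    total_defects: int
    exit_criteria_met: bool   # exit criteria from the test plan
    time_pressure_high: bool

def retest_scope(r: RetestInput) -> str:
    """Decide between rerunning all test scripts and only the affected ones."""
    if r.disrupting_defects > 0 or not r.exit_criteria_met:
        # Earlier execution was too disrupted, or the exit criteria
        # are not yet satisfied: repeat the scripts in full.
        return "full"
    if r.total_defects > 0 and not r.time_pressure_high:
        # Defects were found and there is time for a full rerun.
        return "full"
    # Otherwise rerun only the scripts affected by solved defects.
    return "partial"
```

The point of the sketch is only that the listed factors (exit criteria, severity and number of defects, disruption, time, risk) jointly determine the scope; a real decision would weigh them with project-specific knowledge.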